
Conversation

@sfluegel05 (Collaborator)

No description provided.

schnamo and others added 30 commits June 6, 2024 17:41
@schnamo (Collaborator) commented Oct 31, 2025

  • Adding support for regression problems and a wider range of classification problems
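For context, a minimal sketch of what task-dependent loss selection along these lines could look like (the function name and task labels are illustrative assumptions, not the PR's actual API):

```python
import torch

# Illustrative sketch: supporting regression alongside multi-label and
# multi-class classification typically means switching the criterion
# (and implicitly the output activation) on a task flag.
def make_criterion(task: str) -> torch.nn.Module:
    if task == "regression":
        return torch.nn.MSELoss()
    if task == "multilabel":
        return torch.nn.BCEWithLogitsLoss()  # expects raw logits
    if task == "multiclass":
        return torch.nn.CrossEntropyLoss()  # expects raw logits
    raise ValueError(f"Unknown task type: {task}")
```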

@schnamo (Collaborator) commented Nov 3, 2025

Added a fix for loading a pretrained model from a checkpoint.
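For context, chebai models are Lightning modules, so the fix presumably concerns the standard checkpoint-loading path. A minimal sketch (the class path is an assumption for illustration):

```python
# Class path assumed for illustration; any LightningModule subclass
# exposes the same entry point.
from chebai.models.electra import Electra

# Lightning's standard way of restoring a pretrained model:
model = Electra.load_from_checkpoint("path/to/pretrained.ckpt")
```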

@sfluegel05 (Collaborator, Author)

I added some comments. It would be great if you could have a look at them. Also, you have added quite a number of config files. Some seem very specific (e.g., an ELECTRA config with a different learning rate for a single experiment). My suggestion would be to either remove those configs (and publish them in a paper-specific Zenodo archive, or mention the parameters in the paper) or group them so that new users don't get overwhelmed (e.g., all MoleculeNet dataset configs could go in one folder), as sketched below.
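A possible grouping, purely as an illustration (file names are made up):

```
configs/
├── data/
│   └── moleculenet/     # all MoleculeNet dataset configs in one place
│       ├── bace.yml
│       └── ...
└── model/
    └── electra.yml      # one base ELECTRA config; experiment-specific
                         # learning rates via CLI overrides instead
```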

```python
use_sigmoidal_implication: bool = False,
weight_epoch_dependent: Union[bool | tuple[int, int]] = False,
weight_epoch_dependent: Union[bool, Tuple[int, int]] = False,
weight_epoch_dependent: Union[bool, Tuple[int, int]] = False,
```
@sfluegel05 (Collaborator, Author):
Why does `weight_epoch_dependent` appear twice here?
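Side note on the annotation itself: the original line mixes the two union syntaxes (`Union[...]` with `|` inside). That happens to work on Python 3.10+, but on older versions `bool | tuple[int, int]` raises a `TypeError`. Either style alone is fine:

```python
from typing import Tuple, Union

# typing-module style, compatible with older Python versions:
weight_epoch_dependent: Union[bool, Tuple[int, int]] = False

# PEP 604 style, equivalent on Python 3.10+:
# weight_epoch_dependent: bool | tuple[int, int] = False
```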

```python
if self.pass_loss_kwargs:
    loss_kwargs = loss_kwargs_candidates
    loss_kwargs["current_epoch"] = self.trainer.current_epoch
    # loss_kwargs["current_epoch"] = self.trainer.current_epoch
```
@sfluegel05 (Collaborator, Author):

Why is this commented out? As far as I know, we don't have any loss function at the moment that needs this (it was added for some experimental semantic loss features that didn't perform well). Does this break anything?
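For reference, this is the kind of consumer the kwarg was meant for: a loss whose weight depends on the training epoch. A minimal sketch (class and ramp semantics are assumptions for illustration, not the actual semantic-loss code):

```python
import torch

class EpochWeightedLoss(torch.nn.Module):
    """Wraps a base loss and ramps its weight linearly between two epochs."""

    def __init__(self, base_loss: torch.nn.Module, start_epoch: int = 0, end_epoch: int = 100):
        super().__init__()
        self.base_loss = base_loss
        self.start_epoch = start_epoch
        self.end_epoch = end_epoch

    def forward(self, preds, labels, current_epoch: int = 0, **kwargs):
        # Weight grows from 0 at start_epoch to 1 at end_epoch, then stays at 1.
        span = max(self.end_epoch - self.start_epoch, 1)
        ramp = min(max((current_epoch - self.start_epoch) / span, 0.0), 1.0)
        return ramp * self.base_loss(preds, labels)
```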


```python
from chebai.loss.semantic import DisjointLoss as ElectraChEBIDisjointLoss  # noqa
# TODO: put back in before pull request
# from chebai.loss.semantic import DisjointLoss as ElectraChEBIDisjointLoss  # noqa
```
@sfluegel05 (Collaborator, Author):

I guess you wanted to uncomment this :)

@sfluegel05 (Collaborator, Author):

This will be a problem for merging. I have added new SMILES tokens on a different branch (from PubChem), so the new PubChem-pretrained model (and all models based on it) will depend on those tokens.

Are the tokens you added here actually used by a model, or are they just artifacts?
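For anyone following along: checkpoints store embedding rows by token index, so two branches that append different tokens to the same vocabulary file will disagree on indices. A sketch of the failure mode (file layout assumed: one token per line):

```python
# Hypothetical illustration of an index-based token vocabulary.
def load_token_index(path: str) -> dict[str, int]:
    with open(path) as f:
        return {line.strip(): i for i, line in enumerate(f) if line.strip()}

# If branch A appends PubChem tokens and branch B appends other tokens,
# the same new token can get different indices on each branch, so the
# embedding rows of a checkpoint trained on one branch are misaligned
# when loaded on the other.
```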

@schnamo (Collaborator):

I have removed the part in question and will open an issue to look into what is going on here.

@sfluegel05 (Collaborator, Author):

Is there a reason for deleting this file?

restructuring of config files
fixing small issues from merging
@schnamo (Collaborator) commented Nov 11, 2025

Addressed all comments.
